Europe to push for one-hour takedown law for terrorist content
The European Union's executive body is doubling down on its push for platforms to pre-filter the Internet, publishing a proposal today for all websites to monitor uploads so that terrorist content can be removed quickly. The Commission handed platforms an informal one-hour rule for removing terrorist content back in March. It's now proposing to turn that into law to prevent such violent propaganda from spreading over the Internet. For now the 'rule of thumb' regime continues to apply. But the Commission is putting meat on the bones of its thinking, fleshing out a more expansive proposal for a regulation aimed at "preventing the dissemination of terrorist content online".
- North America > United States (0.15)
- Europe > Germany (0.05)
- Law Enforcement & Public Safety > Terrorism (1.00)
- Government > Regional Government > Europe Government (0.68)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Networks (0.56)
Why Won't Facebook Talk About How Often Its Algorithms Are Wrong?
Two weeks ago Facebook released yet another glossy marketing infographic site and video touting how its state-of-the-art technology, top engineers and teams of experts have made massive strides in conquering yet another scourge of the online world through the power of advanced algorithms. This past week its EMEA counterterrorism lead announced that its algorithms were now deleting 99% of all ISIS and al-Qaida terrorism content across the site. As with all of Facebook's announcements to date, neither of these proclamations made any mention of how often the algorithms that increasingly control its platform are wrong, or whether they are actually right more often than they are wrong. After initially promising to provide a response, the company once again declined to comment on the false positive rates of its algorithms or why, despite repeated requests, it continues to refuse to release those numbers.  Why is the company so afraid to talk about whether its algorithms are actually accurate?
- Information Technology > Services (0.98)
- Law Enforcement & Public Safety > Terrorism (0.60)
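The false-positive question raised above is not pedantic: when genuine terrorist content is a tiny fraction of all posts, even a seemingly low false-positive rate can mean most flagged posts are legitimate. A minimal sketch of that base-rate effect, with all numbers illustrative assumptions rather than Facebook figures:

```python
# Hypothetical numbers to illustrate the base-rate problem; none come from Facebook.
total_posts = 1_000_000
terror_rate = 1 / 10_000        # assumption: 1 in 10,000 posts is actually terrorist content
recall = 0.99                   # the kind of "99% deleted" figure quoted in announcements
false_positive_rate = 0.005     # assumption: 0.5% of benign posts are wrongly flagged

terror_posts = total_posts * terror_rate              # 100 genuinely terrorist posts
benign_posts = total_posts - terror_posts             # 999,900 legitimate posts

true_positives = recall * terror_posts                # 99 correctly removed
false_positives = false_positive_rate * benign_posts  # ~5,000 legitimate posts removed

# Precision: of everything the algorithm flags, how much was actually terrorist content?
precision = true_positives / (true_positives + false_positives)
print(f"precision: {precision:.1%}")  # roughly 2% under these assumptions
```

Under these assumed rates, a system that truthfully deletes 99% of terrorist content would still be wrong about the overwhelming majority of the posts it removes, which is exactly why the article presses for the false-positive numbers.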
The Problem With Using AI To Fight Terrorism On Social Media
Social media has a terrorism problem. From Twitter's famous 2015 letter to Congress that it would never restrict the right of terrorists to use its platform, to its rapid about-face in the face of public and governmental outcry, Silicon Valley has had a change of heart in how it sees its role in curbing the use of its tools by those who wish to commit violence across the world. Today Facebook released a new transparency report that emphasizes its efforts to combat terroristic use of its platform and the role AI is playing in what it claims are significant successes. Yet, that narrative of AI success has been increasingly challenged, from academic studies suggesting that not only is content not being deleted, but that other Facebook tools may actually be assisting terrorists, to a Bloomberg piece last week that demonstrates just how readily terrorist content can still be found on Facebook. Can we really rely on AI to curb terroristic use of social media?
- Law Enforcement & Public Safety > Terrorism (1.00)
- Information Technology > Services (1.00)
AI is an excuse for Facebook to keep messing up
Over the course of an accumulated 10 hours spread out over two days of hearings, Mark Zuckerberg dodged question after question by citing the power of artificial intelligence. It's not even entirely clear what Zuckerberg means by "AI" here. He repeatedly brought up how Facebook's detection systems automatically take down 99 percent of "terrorist content" before any kind of flagging. In 2017, Facebook announced that it was "experimenting" with AI to detect language that "might be advocating for terrorism" -- presumably a deep learning technique. It's not clear that deep learning is actually part of Facebook's automated system.
- North America > United States (0.15)
- Asia > Myanmar (0.05)
New AI technology used by UK government to fight extremist content
The UK Home Office on Monday unveiled a £600,000 artificial intelligence (AI) tool to automatically detect terrorist content. The Home Office cited tests that show the new tool can automatically detect 94% of Daesh propaganda with 99.995% accuracy. That accuracy rate translates into only 50 out of one million randomly selected videos that would require human review. The tool can run on any platform and can integrate into the video upload process to stop most extremist content before it ever reaches the internet. The tool was developed by the Home Office and ASI Data Science.
- Europe > United Kingdom (1.00)
- North America > United States > California (0.08)
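The Home Office figures above are internally consistent and can be sanity-checked: a 99.995% accuracy on non-terrorist videos implies a 0.005% false-positive rate, which over one million randomly selected videos yields the 50 flagged for human review that the article cites. A quick check (the per-million framing is from the article; the variable names are ours):

```python
# Sanity-check the Home Office figures: 99.995% accuracy over one million videos.
accuracy = 0.99995                    # claimed accuracy on randomly selected videos
false_positive_rate = 1 - accuracy    # 0.005%, i.e. 5 in 100,000

videos_screened = 1_000_000
needing_human_review = false_positive_rate * videos_screened

print(round(needing_human_review))    # 50, matching the article's figure
```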
Google is using artificial intelligence to fight terrorist propaganda
Yesterday (18/06) Google detailed the measures it's implementing to combat terrorism online. Outlining its plans in an op-ed in the Financial Times, Kent Walker, Google's General Counsel, said "there should be no place for terrorist content" on Google and YouTube. The company said that while it has thousands of people around the world who review and counter abuse of its platforms, it acknowledges that "more needs to be done". Google plans to increase its use of technology to help identify extremist and terrorism-related videos. But performing this contextual analysis can be complicated: whereas the BBC may be reporting on a terrorist attack, an individual could upload a not-dissimilar video glorifying it.
Can Social Media And Artificial Intelligence Stop Terror By Using AI?
This article originally appeared on the Motley Fool. Once upon a time, terrorists used bombs, machetes, and bullets to get their message across. While that's still the case, modern day terror has a new tool at its disposal, one that it has become particularly adept and successful at deploying -- social media. This stark reality has come to light in the wake of terror campaigns that ended with participants pledging their support to their chosen causes and posting them on social-media platforms. Other insidious forms of communication and objectionable material have flourished in the internet era as well.
This approach harnesses the power of targeted online advertising to reach potential Isis recruits and redirect them towards anti-terrorist videos that can "change their minds about joining". According to the company, the method has proved promising: in previous deployments, potential recruits have clicked through on the ads at an "unusually high rate" and watched over half a million minutes of video content that debunks terrorist recruiting messages.
- Law (1.00)
- Information Technology > Services (0.80)
Facebook Uses Artificial Intelligence to Fight Terrorism
Facebook has revealed it is using artificial intelligence in its ongoing fight to prevent terrorist propaganda from being disseminated on its platform. "We want to find terrorist content immediately, before people in our community have seen it," read the message posted Thursday. "Already, the majority of accounts we remove for terrorism we find ourselves. But we know we can do better at using technology -- and specifically artificial intelligence -- to stop the spread of terrorist content on Facebook." Some of the roles AI plays involve "image matching" to see if an uploaded image matches something previously removed because of its terrorist content. "Language understanding," the company says, will allow it to "understand text that might be advocating for terrorism."
Facebook to Use AI to Block
Amid growing pressure from governments, Facebook says it has stepped up its efforts to address the spread of "terrorist propaganda" on its service by using artificial intelligence (AI). In a blog post on Thursday, the California-based company announced the introduction of AI, including image matching and language understanding, in conjunction with its already-existing human reviewers to better identify and remove content "quickly". "We know we can do better at using technology - and specifically artificial intelligence - to stop the spread of terrorist content on Facebook," Monika Bickert, Facebook's director of global policy management, and Brian Fishman, the company's counterterrorism policy manager, said in the post. "Although our use of AI against terrorism is fairly recent, it's already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We want Facebook to be a hostile place for terrorists."
- North America > United States > California (0.26)
- Asia > Middle East > Qatar > Ad-Dawhah > Doha (0.06)
- Law Enforcement & Public Safety > Terrorism (1.00)
- Information Technology (1.00)